As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
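The generate-then-filter recipe described above can be sketched in a few lines. The following is a hedged illustration of the idea rather than the paper's actual pipeline; `lm_generate` is a hypothetical stand-in for whatever LM completion call is available, and the prompts and filtering rule are assumptions for the example.

```python
# Two-stage sketch: an LM writes candidate yes/no evaluation questions, then an LM
# acting as a judge filters out irrelevant or ambiguous ones.
from typing import Callable, List

def generate_yes_no_questions(lm_generate: Callable[[str], str],
                              behavior: str, n: int) -> List[str]:
    """Stage 1: instruct an LM to write yes/no questions testing a target behavior."""
    prompt = (f"Write a yes/no question that tests whether an AI assistant "
              f"exhibits the following behavior: {behavior}\nQuestion:")
    return [lm_generate(prompt).strip() for _ in range(n)]

def filter_questions(lm_generate: Callable[[str], str],
                     behavior: str, questions: List[str]) -> List[str]:
    """Stage 2: keep only questions the judge LM deems clear and on-topic."""
    kept = []
    for q in questions:
        verdict = lm_generate(
            f"Behavior: {behavior}\nQuestion: {q}\n"
            f"Is this a clear, relevant yes/no question about the behavior? "
            f"Answer Yes or No.\nAnswer:")
        if verdict.strip().lower().startswith("yes"):
            kept.append(q)
    return kept
```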
We analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer, a situation we refer to as mesa-optimization, a neologism we introduce in this paper. We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be, how will it differ from the loss function it was trained under, and how can it be aligned? In this paper, we provide an in-depth analysis of these two primary questions and give an overview of topics for future research.
The demand for high-resolution video content has grown over the years. However, the delivery of high-resolution video is constrained either by the computational resources required for rendering or by the network bandwidth available for remote transmission. To remedy this limitation, we leverage the eye trackers found alongside existing augmented and virtual reality headsets. We propose applying video super-resolution (VSR) techniques to fuse low-resolution context with regional high-resolution context, enabling resource-constrained consumption of high-resolution content without a perceivable drop in quality. Eye trackers provide us with a user's gaze direction, aiding the extraction of the regional high-resolution context. Since only pixels that fall within the gaze region can be resolved by the human eye, a large amount of the delivered content is redundant: we cannot perceive differences in quality outside the observed region. To generate a visually pleasing frame from the fusion of high-resolution and low-resolution regions, we study the capability of a deep neural network to transfer the context of the observed region to the other (low-resolution) regions of the current and future frames. We label this task Foveated Video Super-Resolution (FVSR), as we need to super-resolve the low-resolution regions of current and future frames by fusing in pixels from the gaze region. We propose Cross-Resolution Flow Propagation (CRFP) for FVSR. We train and evaluate CRFP on the REDS dataset on the task of 8x FVSR, i.e., a combination of 8x VSR and fusion of the foveated region. Departing from the conventional evaluation of per-frame quality using SSIM or PSNR, we propose evaluating the past foveated region, measuring the capability of a model to leverage the noise present in eye trackers during FVSR. Code is made available at https://github.com/eugenelet/CRFP.
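As a rough illustration (not the CRFP architecture), the input to foveated super-resolution can be thought of as an upscaled low-resolution frame with the high-resolution patch around the gaze point pasted back in. The shapes, the nearest-neighbour upscaling, and the patch handling below are assumptions for the example.

```python
# Assemble a foveated frame: upscale the low-res frame and overwrite the region
# around the gaze point with the delivered high-resolution patch.
import numpy as np

def fuse_foveated(lr_frame: np.ndarray, hr_patch: np.ndarray,
                  gaze_xy: tuple, scale: int = 8) -> np.ndarray:
    """lr_frame: (h, w, 3) low-res frame; hr_patch: (P, P, 3) high-res crop
    centered on the gaze point (given in high-res pixel coordinates)."""
    h, w, _ = lr_frame.shape
    # Naive nearest-neighbour upscaling stands in for a learned upsampler.
    up = lr_frame.repeat(scale, axis=0).repeat(scale, axis=1)
    p = hr_patch.shape[0]
    gx, gy = gaze_xy
    y0 = int(np.clip(gy - p // 2, 0, h * scale - p))
    x0 = int(np.clip(gx - p // 2, 0, w * scale - p))
    up[y0:y0 + p, x0:x0 + p] = hr_patch  # overwrite the foveal region
    return up
```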
Automated slicing aims to identify subsets of evaluation data where a trained model performs anomalously. This is an important problem for machine learning pipelines in production since it plays a key role in model debugging and comparison, as well as the diagnosis of fairness issues. Scalability has become a critical requirement for any automated slicing system due to the large search space of possible slices and the growing scale of data. We present Autoslicer, a scalable system that searches for problematic slices through distributed metric computation and hypothesis testing. We develop an efficient strategy that reduces the search space through pruning and prioritization. In the experiments, we show that our search strategy finds most of the anomalous slices by inspecting a small portion of the search space.
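As a rough, single-machine illustration of metric computation plus hypothesis testing over candidate slices with support-based pruning: Autoslicer itself runs this search distributed and with its own statistics, so the column names, test, and thresholds below are assumptions for the example.

```python
# Enumerate one-feature slices, prune those with too little support, flag slices
# whose loss is significantly worse than the overall loss, and prioritize them.
import pandas as pd
from scipy import stats

def find_anomalous_slices(df: pd.DataFrame, feature_cols, loss_col="loss",
                          min_support=50, alpha=0.01):
    overall = df[loss_col]
    anomalous = []
    for col in feature_cols:
        for value, group in df.groupby(col):
            if len(group) < min_support:          # pruning: skip tiny slices
                continue
            # Hypothesis test: is the slice's mean loss higher than overall?
            t, p = stats.ttest_ind(group[loss_col], overall, equal_var=False)
            if t > 0 and p < alpha:
                anomalous.append((f"{col}={value}", len(group),
                                  group[loss_col].mean(), p))
    # Prioritization: rank flagged slices by mean loss, worst first.
    return sorted(anomalous, key=lambda r: r[2], reverse=True)
```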
Identifying statistical regularities in solutions to some tasks in multi-task reinforcement learning can accelerate the learning of new tasks. Skill learning offers one way of identifying these regularities by decomposing pre-collected experiences into a sequence of skills. A popular approach to skill learning is maximizing the likelihood of the pre-collected experience with latent variable models, where the latent variables represent the skills. However, there are often many solutions that maximize the likelihood equally well, including degenerate solutions. To address this underspecification, we propose a new objective that combines the maximum likelihood objective with a penalty on the description length of the skills. This penalty incentivizes the skills to maximally extract common structures from the experiences. Empirically, our objective learns skills that solve downstream tasks in fewer samples compared to skills learned from only maximizing likelihood. Further, while most prior works in the offline multi-task setting focus on tasks with low-dimensional observations, our objective can scale to challenging tasks with high-dimensional image observations.
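A minimal sketch of the kind of combined objective described above, assuming a latent-variable skill model in which the description length of a skill is approximated by the KL divergence of its posterior from the prior (a standard code-length surrogate). This is illustrative only, not the paper's exact formulation.

```python
# Combined objective: reconstruction term plus a weighted description-length penalty.
import torch

def skill_objective(nll: torch.Tensor, kl_skill_posterior_prior: torch.Tensor,
                    beta: float = 0.1) -> torch.Tensor:
    """nll: -log p(trajectory | skill), averaged over a batch;
    kl_skill_posterior_prior: KL(q(skill | trajectory) || p(skill)) per batch."""
    description_length = kl_skill_posterior_prior  # nats needed to encode the skill
    return nll + beta * description_length
```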
In this work, we identify elements of effective machine learning datasets in astronomy and present suggestions for their design and creation. Machine learning has become an increasingly important tool for analyzing and understanding the large-scale flood of data in astronomy. To take advantage of these tools, datasets are required for training and testing. However, building machine learning datasets for astronomy can be challenging. Astronomical data is collected from instruments built to explore science questions in a traditional fashion rather than to conduct machine learning. Thus, it is often the case that raw data, or even downstream processed data, is not in a form amenable to machine learning. We explore the construction of machine learning datasets and ask: what elements define an effective machine learning dataset? We define effective machine learning datasets in astronomy to be formed of well-defined data points, structure, and metadata. We discuss why these elements are important for astronomical applications and ways to put them into practice. We posit that these qualities not only make the data suitable for machine learning but also help foster usable, reusable, and replicable science practices.
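As one concrete (hypothetical) way to encode well-defined data points, structure, and metadata in code, a self-describing record type might look like the following; the field names are illustrative, not a standard.

```python
# A self-describing data point: the measurement, its context, and its provenance.
from dataclasses import dataclass, field

@dataclass
class SkyObjectSample:
    object_id: str    # stable identifier back into the source catalog
    flux: list        # the measurement the model will consume
    band: str         # filter/band in which the flux was measured
    label: str        # task label, e.g. a morphological class
    metadata: dict = field(default_factory=dict)  # instrument, survey, processing version
```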
Intentionally crafted adversarial samples have effectively exploited weaknesses in deep neural networks. A standard method in adversarial robustness assumes a framework to defend against samples crafted by minimally perturbing a sample such that its corresponding model output changes. These sensitivity attacks exploit the model's sensitivity toward task-irrelevant features. Another form of adversarial sample can be crafted via invariance attacks, which exploit the model underestimating the importance of relevant features. Previous literature has indicated a tradeoff in defending against both attack types within a strictly L_p bounded defense. To promote robustness toward both types of attacks beyond Euclidean distance metrics, we use metric learning to frame adversarial regularization as an optimal transport problem. Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
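For concreteness, one way an optimal-transport term could be instantiated as a regularizer between clean and perturbed feature batches is an entropy-regularized (Sinkhorn) distance. The sketch below illustrates that general idea under uniform sample weights and a squared Euclidean cost; it is not the paper's specific construction.

```python
# Log-domain Sinkhorn iterations giving an entropic optimal-transport cost between
# two feature batches; such a term could be added to a training loss as a regularizer.
import torch

def sinkhorn_distance(x: torch.Tensor, y: torch.Tensor,
                      eps: float = 0.1, n_iters: int = 100) -> torch.Tensor:
    """x: (n, d) and y: (m, d) feature batches with uniform weights."""
    cost = torch.cdist(x, y, p=2) ** 2              # pairwise squared Euclidean cost
    n, m = cost.shape
    log_mu = torch.log(torch.full((n,), 1.0 / n))
    log_nu = torch.log(torch.full((m,), 1.0 / m))
    f = torch.zeros(n)
    g = torch.zeros(m)
    for _ in range(n_iters):                        # dual potential updates in log space
        f = eps * (log_mu - torch.logsumexp((g[None, :] - cost) / eps, dim=1))
        g = eps * (log_nu - torch.logsumexp((f[:, None] - cost) / eps, dim=0))
    plan = torch.exp((f[:, None] + g[None, :] - cost) / eps)
    return (plan * cost).sum()                      # transport cost under the entropic plan
```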
We provide the largest publicly available dictionaries for imputing race and ethnicity with Bayesian Improved Surname Geocoding (BISG). The dictionaries are based on voter files from six Southern states that collect self-reported racial data upon voter registration. Our data cover a far larger range of names than any comparable dataset, containing roughly 1 million first names, 1.1 million middle names, and 1.4 million surnames. Individuals are classified into five mutually exclusive racial and ethnic groups (White, Black, Hispanic, Asian, and Other), and each name in each dictionary is given a count for every racial/ethnic group. These counts can then be normalized by row or column to obtain the conditional probability of race given a name, or of a name given race. These conditional probabilities can in turn be deployed in data-analysis tasks where ground-truth race and ethnicity data are unavailable.
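The row/column normalization step described above is straightforward; here is a toy sketch with made-up counts (the real dictionaries contain on the order of a million names per type).

```python
# Normalize a name-by-race count table to get conditional probabilities.
import pandas as pd

counts = pd.DataFrame(
    {"white": [4300, 120], "black": [350, 90], "hispanic": [60, 2100],
     "asian": [25, 40], "other": [65, 50]},
    index=["SMITH", "GARCIA"])  # illustrative counts only

# P(race | surname): normalize each row to sum to 1.
p_race_given_name = counts.div(counts.sum(axis=1), axis=0)

# P(surname | race): normalize each column to sum to 1.
p_name_given_race = counts.div(counts.sum(axis=0), axis=1)
```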
Generative adversarial networks (GANs) are widely used tools for generative modeling of complex data. Despite their empirical success, the training of GANs is not fully understood due to the min-max optimization of the generator and discriminator. This paper analyzes these joint dynamics when the true samples, as well as the generated samples, are discrete, finite sets, and the discriminator is kernel-based. A simple yet expressive framework for analyzing training, called the $\textit{isolated points model}$, is introduced. In the proposed model, the distance between true samples greatly exceeds the kernel width, so that each generated point is influenced by at most one true point. Our model enables a precise characterization of the conditions for convergence to good and bad minima. In particular, the analysis explains two common failure modes: (i) approximate mode collapse and (ii) divergence. Numerical simulations are provided that predictably replicate these behaviors.
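The isolated-points regime can be illustrated numerically: with a Gaussian kernel whose width is much smaller than the spacing between real samples, the kernel weight contributed by all but the nearest real point is negligible. The values below are illustrative only and are not taken from the paper.

```python
# Toy check of the isolated-points assumption: kernel width << spacing between
# real samples implies each generated point "sees" at most one real point.
import numpy as np

def gaussian_kernel(x, y, width):
    return np.exp(-np.abs(x - y) ** 2 / (2 * width ** 2))

real = np.array([0.0, 10.0, 20.0])   # real samples spaced far apart
width = 0.5                           # kernel width much smaller than the spacing
generated = 0.7                       # a generated point near the first real sample

influence = gaussian_kernel(generated, real, width)
print(influence)  # only the first entry is non-negligible; the rest underflow toward zero
```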
In the domain of face recognition, there is a puzzling timing discrepancy between macaque neurophysiology on the one hand and human electrophysiology on the other. Single-unit recordings in macaques have revealed face-identity-specific responses in extrastriate visual cortex within 100 ms of stimulus onset. However, in human EEG and MEG experiments, consistent distinctions between the neural activity corresponding to unfamiliar and familiar faces have been reported to emerge only at around 250 ms. This suggests that there may be a hitherto undiscovered early correlate of face familiarity in human electrophysiological traces. Here, we report a successful search for such a correlate in dense MEG recordings using pattern-classification techniques. Our analyses reveal a marker of face familiarity as early as 85 ms after stimulus onset. Low-level image attributes, such as luminance and color distributions, cannot account for this early-emerging response difference. These results help reconcile the human and macaque data and provide clues about the neural mechanisms underlying familiar-face perception.
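Time-resolved pattern classification of the kind described can be sketched as training a separate linear decoder at each time point of epoched MEG data and asking when familiarity becomes decodable. The array shapes, classifier, and cross-validation setup below are assumptions for illustration, not the study's pipeline.

```python
# Train a logistic-regression decoder per time point and return its cross-validated accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_over_time(epochs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """epochs: (n_trials, n_sensors, n_times); labels: (n_trials,) with
    0 = unfamiliar face, 1 = familiar face. Returns accuracy per time point."""
    n_trials, n_sensors, n_times = epochs.shape
    accuracy = np.zeros(n_times)
    for t in range(n_times):
        clf = LogisticRegression(max_iter=1000)
        accuracy[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=5).mean()
    return accuracy  # above-chance accuracy at early latencies would mark early familiarity information
```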